3 research outputs found

    Testing Basado en la Busqueda en TESTAR

    [EN] Graphical user interfaces (GUIs) are a main entry point for testing an application. Several automated tools exist for testing at the GUI level; those that automate test-case design usually rely on random algorithms to choose the next action to execute in the test sequence (a practice known as monkey testing). This technique is effective for immature applications that have been poorly tested and still contain many errors. Giving more "intelligence" to the action selection mechanism would be a major step towards wider adoption of automated testing tools, which in turn would improve software quality; this is precisely the goal of this work. To achieve it, we use a search-based approach that turns testing into an optimization problem. Our starting point is TESTAR, a tool developed within the EU research project FITTEST. Two methods have been implemented and evaluated: Q-learning and genetic programming. Another result of this work is the definition of metrics that properly guide the optimization; four new metrics have been introduced. The combination of search-based algorithms with these metrics has yielded very promising results that will improve TESTAR.
    Almenar Pedrós, F. (2016). Testing Basado en la Busqueda en TESTAR (TFG). http://hdl.handle.net/10251/71699
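The abstract above mentions Q-learning as one of the two action selection methods evaluated. The sketch below illustrates the general idea of Q-learning-based action selection for GUI testing: a Q-table over (state, action) pairs, epsilon-greedy selection, and a reward that decays with repetition so the tester is pushed towards unexplored behaviour. All class and parameter names are invented for illustration; this is not TESTAR's actual implementation or API.

```python
import random
from collections import defaultdict

# Hypothetical sketch of Q-learning action selection for GUI testing.
# States/actions are opaque identifiers; reward favours novelty.
class QLearningSelector:
    def __init__(self, alpha=0.5, gamma=0.9, epsilon=0.1, init_q=1.0):
        # Optimistic initial Q-values make unexplored actions attractive.
        self.q = defaultdict(lambda: init_q)
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon
        self.executed = defaultdict(int)  # how often each (state, action) ran

    def select(self, state, actions):
        # Epsilon-greedy: mostly exploit the best-known action, sometimes explore.
        if random.random() < self.epsilon:
            return random.choice(actions)
        return max(actions, key=lambda a: self.q[(state, a)])

    def update(self, state, action, next_state, next_actions):
        self.executed[(state, action)] += 1
        # Reward decays with repetition, steering tests toward novel behaviour.
        reward = 1.0 / self.executed[(state, action)]
        best_next = max((self.q[(next_state, a)] for a in next_actions), default=0.0)
        key = (state, action)
        self.q[key] += self.alpha * (reward + self.gamma * best_next - self.q[key])
```

In a testing loop, `select` would be called at every GUI state and `update` after observing the resulting state; with `epsilon=0` the selector becomes fully greedy.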

    Técnicas de paralelización de código para robots basados en emociones

    [EN] Control architectures based on emotions are becoming one of the most promising solutions for implementing robotic systems. These systems are controlled by emotional processes, which guide the robot when deciding which behaviours to activate in order to fulfil its objectives. The number of emotional processes grows sharply with the complexity of the problem, to the point where the computing capacity of a single-core processor is no longer sufficient. Fortunately, these systems are highly parallelizable, so the available computing power can be greatly increased by applying parallelization techniques. In this TFG we apply different parallelization techniques with the aim of accelerating the computation of the emotions that determine the robot's behaviour. To this end we use and compare Graphics Processing Units (GPUs), multicore processors and SIMD (Single Instruction Multiple Data) instructions.
    Almenar Pedrós, F. (2014). Técnicas de paralelización de código para robots basados en emociones (TFG). http://hdl.handle.net/10251/48385
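The key observation in the abstract above is that emotional processes are independent of one another, so they map cleanly onto parallel hardware. The sketch below illustrates that data-parallel decomposition with a simple thread pool; the activation function and process model are invented for the example, and the thesis itself targets GPUs, multicore CPUs and SIMD units rather than Python threads.

```python
import math
from concurrent.futures import ThreadPoolExecutor

# Toy "emotional process": squash a weighted stimulus into [0, 1].
# This appraisal function is illustrative, not taken from the thesis.
def emotional_activation(stimulus):
    return 1.0 / (1.0 + math.exp(-stimulus))

def evaluate_parallel(stimuli, workers=4):
    # Each emotional process is independent, so they can be evaluated
    # concurrently; on real hardware this would be a GPU kernel or SIMD loop.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(emotional_activation, stimuli))
```

Because every process reads only its own input, the decomposition involves no shared mutable state, which is what makes the GPU and SIMD versions described in the abstract straightforward.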

    Using genetic programming to evolve action selection rules in traversal-based automated software testing: results obtained with the TESTAR tool

    [EN] Traversal-based automated software testing involves testing an application via its graphical user interface (GUI), thereby taking the user's point of view and executing actions in a human-like manner. These actions are decided on the fly, as the software under test (SUT) is being run, as opposed to being set up as a sequence prior to testing and then used to exercise the SUT. In practice, random choice is commonly used to decide which action to execute at each state (a procedure commonly referred to as monkey testing), but a number of alternative mechanisms have also been proposed in the literature. Here we propose using genetic programming (GP) to evolve such an action selection strategy, defined as a list of IF-THEN rules. Genetic programming has proved to be suited for evolving all sorts of programs, and rules in particular, provided adequate primitives (functions and terminals) are defined. These primitives must aim to extract the most relevant information from the SUT and the dynamics of the testing process. We introduce a number of such primitives suited to the problem at hand and evaluate their usefulness based on various metrics. We carry out experiments and compare the results with those obtained by random selection and also by Q-learning, a reinforcement learning technique. Three applications are used as SUTs in the experiments. The analysis shows the potential of GP to evolve action selection strategies.
    Esparcia-Alcázar, A.I.; Almenar-Pedrós, F.; Vos, T.E.; Rueda Molina, U. (2018). Using genetic programming to evolve action selection rules in traversal-based automated software testing: results obtained with the TESTAR tool. Memetic Computing 10(3):257-265. https://doi.org/10.1007/s12293-018-0263-8
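The paper above evolves an action selection strategy as a list of IF-THEN rules built from primitives over the SUT state. The sketch below shows what applying such a rule list could look like: the first rule whose condition matches some candidate action determines the choice, with a fall-back to random (monkey) selection. The predicates and state layout are invented for illustration; the paper defines its own function and terminal sets.

```python
import random

# Hypothetical terminal predicates inspecting a candidate action and the
# testing state. Real primitives would query the SUT via the testing tool.
def is_unexplored(action, state):
    return action not in state["executed"]

def is_type_action(action, state):
    return action.startswith("type:")

# An "evolved" rule list: the first rule whose condition holds for some
# candidate action decides the selection. GP would evolve this list.
RULES = [
    is_unexplored,
    is_type_action,
]

def select_action(actions, state):
    for condition in RULES:
        preferred = [a for a in actions if condition(a, state)]
        if preferred:
            return random.choice(preferred)
    # No rule fired: fall back to pure monkey testing.
    return random.choice(actions)
```

Representing the strategy as an ordered rule list keeps the GP search space small and the evolved individuals human-readable, which matches the IF-THEN encoding described in the abstract.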